Fault Localization for Buggy Deep Learning Framework Conversions in Image Recognition
Louloudakis, Nikolaos, Gibson, Perry, Cano, José, Rajan, Ajitha
When deploying Deep Neural Networks (DNNs), developers often convert models from one deep learning framework to another (e.g., TensorFlow to PyTorch). However, this process is error-prone and can impact target model accuracy. To assess the extent of this impact, we perform and briefly present a differential analysis of three DNNs widely used for image recognition (MobileNetV2, ResNet101, and InceptionV3) converted across four well-known deep learning frameworks (PyTorch, Keras, TensorFlow (TF), and TFLite), which revealed numerous model crashes and output label discrepancies of up to 72%. To mitigate such errors, we present a novel approach to fault localization and repair of buggy deep learning framework conversions, focusing on pre-trained image recognition models. Our technique consists of four stages of analysis: 1) conversion tools, 2) model parameters, 3) model hyperparameters, and 4) graph representation. In addition, we propose strategies for repairing the detected faults. We implement our technique on top of the Apache TVM deep learning compiler and test it by conducting a preliminary fault localization analysis for the conversion of InceptionV3 from TF to TFLite. Our approach detected a fault in a common DNN converter tool, which introduced precision errors in weights, reducing model accuracy. After our fault localization, we repaired the issue, reducing our conversion error to zero.
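The second analysis stage above (model parameters) can be illustrated with a minimal sketch. The paper does not publish its implementation here, so the code below is a hypothetical differential check: it compares corresponding weight tensors from a source model and its converted counterpart, flagging the kind of precision fault the abstract describes. The float16 round-trip stands in for a lossy converter.

```python
import numpy as np

def max_param_drift(params_a, params_b):
    """Largest absolute difference between corresponding parameter
    tensors of a source model and its converted counterpart."""
    return max(float(np.max(np.abs(a - b)))
               for a, b in zip(params_a, params_b))

# Simulated source-model weights (standing in for, e.g., TF originals).
rng = np.random.default_rng(0)
source = [rng.standard_normal((4, 4)).astype(np.float32) for _ in range(3)]

# A "converted" copy whose weights were round-tripped through float16,
# mimicking a converter that silently loses precision.
converted = [w.astype(np.float16).astype(np.float32) for w in source]

drift = max_param_drift(source, converted)
faulty = drift > 1e-6  # tolerance below which conversion counts as exact
```

A drift above the tolerance localizes the fault to the parameter stage, before any graph-level comparison is attempted; an exact conversion yields zero drift.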
An Efficient Spiking Neural Network for Recognizing Gestures with a DVS Camera on the Loihi Neuromorphic Processor
Massa, Riccardo, Marchisio, Alberto, Martina, Maurizio, Shafique, Muhammad
Spiking Neural Networks (SNNs), the third generation of NNs, have come under the spotlight for machine learning applications due to their biological plausibility and reduced complexity compared to traditional artificial Deep Neural Networks (DNNs). These SNNs can be implemented with extreme energy efficiency on neuromorphic processors like the Intel Loihi research chip, and fed by event-based sensors, such as DVS cameras. However, multi-layer DNNs still achieve higher accuracy on image classification and recognition tasks, because research on learning rules for SNNs in real-world applications is not yet mature. Accuracy results for SNNs are typically obtained either by converting trained DNNs into SNNs, or by directly designing and training SNNs in the spiking domain. Towards the conversion from a DNN to an SNN, we perform a comprehensive analysis of this process, specifically designed for Intel Loihi, and present our methodology for designing an SNN that achieves nearly the same accuracy as its corresponding DNN. Towards the usage of event-based sensors, we design a pre-processing method, evaluated on the DvsGesture dataset, which makes the event streams usable in the DNN domain. Based on the outcome of the first analysis, we then train a DNN on the pre-processed DvsGesture dataset and convert it into the spike domain for deployment on Intel Loihi, enabling real-time gesture recognition. The results show that our SNN achieves 89.64% classification accuracy and occupies only 37 Loihi cores.
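The pre-processing step the abstract mentions (making DVS event streams usable by a DNN) is commonly done by accumulating events into frames. The paper's exact method is not given here, so the sketch below is an assumption: it bins events into a fixed number of time windows and counts positive and negative polarities per pixel in two channels, producing a frame tensor a conventional DNN can consume.

```python
import numpy as np

def events_to_frames(events, sensor_shape=(128, 128), n_frames=4):
    """Accumulate DVS events (timestamp, x, y, polarity) into frames.

    Events are split into n_frames equal time windows; each frame holds
    per-pixel counts of negative (channel 0) and positive (channel 1)
    events, giving a (n_frames, 2, H, W) tensor for DNN training.
    """
    t = events[:, 0]
    # Right edge is nudged so the latest event falls in the last window.
    edges = np.linspace(t.min(), t.max() + 1, n_frames + 1)
    frames = np.zeros((n_frames, 2, *sensor_shape), dtype=np.float32)
    for t_i, x, y, p in events:
        f = np.searchsorted(edges, t_i, side="right") - 1
        frames[f, int(p), int(y), int(x)] += 1.0
    return frames

# A handful of synthetic events: (timestamp_us, x, y, polarity)
events = np.array([
    [0,   5, 5, 1],
    [100, 5, 5, 0],
    [900, 7, 3, 1],
    [999, 7, 3, 1],
], dtype=np.float64)
frames = events_to_frames(events, sensor_shape=(16, 16), n_frames=2)
```

Here the first two events land in the first time window and the last two in the second, so each frame accumulates two event counts.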
Microsoft Excel Uses AI To Let You Insert Data From Picture
Data entry in spreadsheets is not something that many people enjoy doing. It becomes even more tedious when the data comes from a source that does not allow copy/paste, such as a printed receipt. Microsoft wants to take the pain out of that by relying on artificial intelligence to enable Excel to insert data from a picture. Excel's new insert-data-from-picture feature will let you scan a receipt or table-type information, such as a recipe, into a spreadsheet. It will make the whole process as simple as taking a picture.
Efficiently Merging Symbolic Rules into Integrated Rules
Prentzas, Jim (Democritus University of Thrace) | Hatzilygeroudis, Ioannis (University of Patras, Greece)
Neurules are a type of neuro-symbolic rule integrating neurocomputing and production rules. Each neurule is represented as an adaline unit. Neurules exhibit characteristics such as modularity, naturalness, and the ability to perform interactive and integrated inferences. One way of producing a neurule base is to convert an existing symbolic rule base, yielding an equivalent but more compact rule base. The conversion process merges symbolic rules having the same conclusion into one or more neurules. Because an adaline unit cannot handle linearly inseparable training sets, more than one neurule may be produced for a single conclusion. In this paper, we define criteria that determine whether a set of symbolic rules can be converted into a single, equivalent but more compact neurule; such criteria are of general representational interest. By applying them, the conversion of symbolic rules into neurules becomes more time- and space-efficient, since training attempts that are guaranteed to fail are skipped. Experimental results are promising.
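The separability limitation above is the classic adaline one: a single linear unit can only realize linearly separable truth functions. The sketch below is an illustrative toy, not the paper's algorithm: it trains a plain adaline with least-mean-squares on an AND-style rule set (separable, so one unit suffices) and on an XOR-style set (inseparable, so the merge into a single neurule must fail and the set would be split).

```python
import itertools

def train_adaline(samples, epochs=200, lr=0.1):
    """Least-mean-squares training of a single adaline unit.

    samples: list of (inputs, target) with values in {-1, 1}.
    Returns (weights, bias, classifies_all), where classifies_all says
    whether the trained unit reproduces the sign of every target.
    """
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = b + sum(wi * xi for wi, xi in zip(w, x))
            err = t - y  # LMS update on the raw (pre-threshold) output
            b += lr * err
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    ok = all((b + sum(wi * xi for wi, xi in zip(w, x))) * t > 0
             for x, t in samples)
    return w, b, ok

# Rule sets encoded as truth tables over two conditions:
# AND is linearly separable; XOR is not, so one adaline cannot express it.
inputs = list(itertools.product([-1, 1], repeat=2))
and_set = [(x, 1 if x == (1, 1) else -1) for x in inputs]
xor_set = [(x, 1 if x[0] != x[1] else -1) for x in inputs]

_, _, and_ok = train_adaline(and_set)
_, _, xor_ok = train_adaline(xor_set)
```

Detecting inseparability before training is exactly what the paper's criteria aim to do, so wasted training runs like the XOR case can be skipped outright.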